Hyperspectral Super-Resolution with Coupled Tucker Approximation: Recoverability and SVD-based algorithms
We propose a novel approach for hyperspectral super-resolution based on low-rank tensor approximation for a coupled low-rank multilinear (Tucker) model. We show that correct recovery holds for a wide range of multilinear ranks. For the coupled tensor approximation, we propose two SVD-based algorithms that are simple and fast, yet achieve performance comparable to state-of-the-art methods. The approach is applicable to the case of unknown spatial degradation and to the pansharpening problem.
Comment: IEEE Transactions on Signal Processing, Institute of Electrical and Electronics Engineers, in press
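The SVD-based flavor of Tucker approximation can be sketched with a plain truncated HOSVD: factor matrices from mode-wise SVDs, then a core by contraction. This is only an illustration of the underlying tool, not the authors' coupled algorithm; all names and ranks are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: factors from SVDs of the unfoldings, then the core."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    G = T
    for n, Un in enumerate(U):  # contract T with U_n^T along each mode
        G = np.moveaxis(np.tensordot(Un.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, U

def reconstruct(G, U):
    T = G
    for n, Un in enumerate(U):
        T = np.moveaxis(np.tensordot(Un, np.moveaxis(T, n, 0), axes=1), 0, n)
    return T

# A tensor of exact low multilinear rank is recovered exactly.
rng = np.random.default_rng(0)
G0 = rng.standard_normal((2, 2, 2))
U0 = [np.linalg.qr(rng.standard_normal((d, 2)))[0] for d in (10, 10, 5)]
T = reconstruct(G0, U0)
G, U = hosvd(T, (2, 2, 2))
print(np.allclose(reconstruct(G, U), T))  # exact for matching ranks
```

The same mode-wise SVD machinery is what makes such algorithms simple and fast compared with iterative coupled solvers.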
On-line blind unmixing for hyperspectral pushbroom imaging systems
In this paper, on-line blind unmixing of hyperspectral images is addressed. Inspired by the Incremental Non-negative Matrix Factorization (INMF) method, we propose an on-line NMF adapted to the acquisition scheme of a pushbroom imager. Because of the non-uniqueness of the NMF model, a minimum volume constraint on the endmembers is added, which reduces the set of admissible solutions. This results in a stable algorithm yielding results similar to those of standard off-line NMF methods while drastically reducing the computation time. The algorithm is applied to wood hyperspectral images, showing that such a technique is effective for the on-line prediction of wood piece rendering after finishing. Index Terms— Hyperspectral imaging, Pushbroom imager, On-line Non-negative Matrix Factorization, Minimum volume constraint
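A minimal sketch of the pushbroom setting: spatial lines arrive one at a time, and the factorization is refreshed with a few warm-started multiplicative updates per line. This is a generic incremental NMF illustration under assumed toy dimensions; the paper's minimum-volume constraint is deliberately omitted.

```python
import numpy as np

def nmf_update(X, W, H, n_iter=50, eps=1e-9):
    """Standard multiplicative updates for X ≈ W H (Frobenius loss)."""
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
r, n_bands = 3, 16
W_true = rng.random((n_bands, r))      # ground-truth endmember spectra
W = rng.random((n_bands, r))           # running estimate
lines = []
for t in range(20):                    # one spatial line per acquisition step
    A_t = rng.random((r, 8))           # abundances of the new line
    lines.append(W_true @ A_t)
    X = np.hstack(lines)               # data seen so far
    # Warm-start abundances from the current endmember estimate.
    H = np.maximum(np.linalg.pinv(W) @ X, 1e-9)
    W, H = nmf_update(X, W, H, n_iter=10)  # only a few updates per new line
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```

Warm-starting from the previous estimate is what keeps the per-line cost low compared with re-running an off-line NMF from scratch at each acquisition.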
Homotopy based algorithms for $\ell_0$-regularized least-squares
Sparse signal restoration is usually formulated as the minimization of a quadratic cost function $\|y - Ax\|_2^2$, where $A$ is a dictionary and $x$ is an unknown sparse vector. It is well-known that imposing an $\ell_0$ constraint leads to an NP-hard minimization problem. The convex relaxation approach has received considerable attention, where the $\ell_0$-norm is replaced by the $\ell_1$-norm. Among the many efficient $\ell_1$ solvers, the homotopy algorithm minimizes $\|y - Ax\|_2^2 + \lambda \|x\|_1$ with respect to $x$ for a continuum of $\lambda$'s. It is inspired by the piecewise regularity of the $\ell_1$-regularization path, also referred to as the homotopy path. In this paper, we address the minimization problem $\|y - Ax\|_2^2 + \lambda \|x\|_0$ for a continuum of $\lambda$'s and propose two heuristic search algorithms for $\ell_0$-homotopy. Continuation Single Best Replacement is a forward-backward greedy strategy extending the Single Best Replacement algorithm, previously proposed for $\ell_0$-minimization at a given $\lambda$. The adaptive search of the $\lambda$-values is inspired by $\ell_1$-homotopy. Regularization Path Descent is a more complex algorithm exploiting the structural properties of the $\ell_0$-regularization path, which is piecewise constant with respect to $\lambda$. Both algorithms are empirically evaluated for difficult inverse problems involving ill-conditioned dictionaries. Finally, we show that they can be easily coupled with usual methods of model order selection.
Comment: 38 pages
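The Single Best Replacement idea at a fixed $\lambda$ can be sketched compactly: at each step, toggle the single index (insertion or removal) that most decreases $\|y - Ax\|_2^2 + \lambda \|x\|_0$, and stop when no toggle improves the cost. This is written from the description above as an illustration; implementation details may differ from the paper's algorithm.

```python
import numpy as np

def sbr(A, y, lam, max_iter=100):
    """Forward-backward greedy minimization of ||y - Ax||^2 + lam*||x||_0."""
    n = A.shape[1]
    support = set()

    def cost(S):
        """Least-squares cost on support S plus the l0 penalty."""
        if not S:
            return np.dot(y, y), np.zeros(n)
        idx = sorted(S)
        xS, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        x = np.zeros(n)
        x[idx] = xS
        r = y - A @ x
        return np.dot(r, r) + lam * len(S), x

    best_cost, best_x = cost(support)
    for _ in range(max_iter):
        # Try toggling (inserting or removing) every single index.
        moves = [(cost(support ^ {i}), i) for i in range(n)]
        (c, x), i = min(moves, key=lambda m: m[0][0])
        if c >= best_cost - 1e-12:
            break                       # no single replacement improves the cost
        support ^= {i}
        best_cost, best_x = c, x
    return best_x, support

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true                          # noiseless 2-sparse observation
x_hat, S = sbr(A, y, lam=0.1)
print(sorted(S))
```

The homotopy algorithms of the paper then adapt the $\lambda$-values along the path instead of fixing $\lambda$ in advance.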
On the properties of the solution path of the constrained and penalized L2-L0 problems
12 pages. Technical report on the properties of the L0-constrained least-squares minimization problem and the L0-penalized least-squares minimization problem: domain of optimization, notion of solution path, properties of the "penalized" solution path.
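The penalized solution path studied here can be illustrated numerically on a toy problem: for $\min_x \|y - Ax\|_2^2 + \lambda \|x\|_0$, exhaustive search over supports shows that the optimal support is piecewise constant in $\lambda$. The sizes and values below are purely illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
y = rng.standard_normal(6)

def best_support(lam):
    """Exhaustive minimizer of ||y - Ax||^2 + lam*|S| over supports S (toy sizes)."""
    best = (np.dot(y, y), ())          # empty support as baseline
    for k in range(1, 5):
        for S in itertools.combinations(range(4), k):
            xS, *_ = np.linalg.lstsq(A[:, list(S)], y, rcond=None)
            r = y - A[:, list(S)] @ xS
            c = np.dot(r, r) + lam * k
            if c < best[0]:
                best = (c, S)
    return best[1]

# The optimal support changes only at a finite set of breakpoints in lam.
for lam in (0.01, 0.1, 0.5, 2.0):
    print(lam, best_support(lam))
```

As $\lambda$ grows the optimal support shrinks, from the full support at $\lambda \to 0$ down to the empty support for large $\lambda$.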
Tensor-based framework for training flexible neural networks
Activation functions (AFs) are an important part of the design of neural networks (NNs), and their choice plays a predominant role in the performance of an NN. In this work, we are particularly interested in the estimation of flexible activation functions using tensor-based solutions, where the AFs are expressed as a weighted sum of predefined basis functions. To do so, we propose a new learning algorithm which solves a constrained coupled matrix-tensor factorization (CMTF) problem. This technique fuses the first- and zeroth-order information of the NN, where the first-order information is contained in a Jacobian tensor, following a constrained canonical polyadic decomposition (CPD). The proposed algorithm can handle different decomposition bases. The goal of this method is to compress large pretrained NN models by replacing subnetworks, i.e., one or multiple layers of the original network, with a new flexible layer. The approach is applied to a pretrained convolutional neural network (CNN) used for character classification.
Comment: 26 pages, 13 figures
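The core modeling idea, an AF expressed as a weighted sum of predefined basis functions, can be sketched independently of the CMTF training procedure: here the weights are simply fit by least squares to a target nonlinearity. The basis choice and the target are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Assumed (illustrative) basis: constant, linear, quadratic, and tanh terms.
basis = [lambda t: np.ones_like(t), lambda t: t, lambda t: t**2, np.tanh]

def flexible_af(t, w):
    """AF(t) = sum_k w_k * phi_k(t), a weighted sum of basis functions."""
    return sum(wk * phi(t) for wk, phi in zip(w, basis))

# Fit the weights so the flexible AF imitates a target activation (softplus).
t = np.linspace(-3, 3, 200)
target = np.log1p(np.exp(t))
Phi = np.column_stack([phi(t) for phi in basis])
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
err = np.linalg.norm(flexible_af(t, w) - target) / np.linalg.norm(target)
print(err)  # small relative fitting error
```

In the paper, the weights are instead learned jointly with the layer parameters through the constrained CMTF problem, using both function values and Jacobian (first-order) information.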
On factorization of rank-one auto-correlation matrix polynomials
This article characterizes the rank-one factorization of auto-correlation matrix polynomials. We establish a necessary and sufficient condition for uniqueness of the factorization, based on the greatest common divisor (GCD) of multiple polynomials. In the unique case, we show that the factorization can be carried out explicitly using GCDs. In the non-unique case, the number of non-trivially different factorizations is given and all solutions are enumerated.
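A scalar toy version shows the object being factorized: the auto-correlation sequence of a real polynomial $h$ has roots in pairs $(\rho, 1/\rho)$, and picking one root from each pair recovers a factor. This root-pairing sketch only illustrates the non-uniqueness at stake; the paper's GCD-based construction for the matrix-polynomial case is not reproduced here.

```python
import numpy as np

h = np.array([1.0, -0.9, 0.2])        # "true" factor (leading coefficient first)
r = np.convolve(h, h[::-1])           # auto-correlation sequence, symmetric

roots = np.roots(r)                   # roots come in (rho, 1/rho) pairs
inside = roots[np.abs(roots) < 1]     # keep one root from each pair
g = np.poly(inside).real              # minimum-phase candidate, monic
# Rescale so that conv(g, reversed(g)) matches the central lag of r.
g *= np.sqrt(r[len(r) // 2] / np.convolve(g, g[::-1])[len(r) // 2])
print(np.allclose(np.convolve(g, g[::-1]), r))
```

Each different choice of one root per pair yields another valid factor, which is exactly the kind of ambiguity the GCD condition of the article resolves.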